Subject: CRYPTO-GRAM, December 15, 1998
Date: Wed, 16 Dec 1998 00:37:06 -0600
From: Bruce Schneier
To: crypto-gram@chaparraltree.com

                  CRYPTO-GRAM
               December 15, 1998

               by Bruce Schneier
                   President
              Counterpane Systems

           schneier@counterpane.com
          http://www.counterpane.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on cryptography and computer security.

Back issues are available at http://www.counterpane.com. To subscribe or unsubscribe, see below.

Copyright (c) 1998 by Bruce Schneier

** *** ***** ******* *********** *************

In this issue:
     The Fallacy of Cracking Contests
     How to Recognize Plaintext
     News
     The Doghouse: Iomega Zip Disks
     Counterpane Systems News
     Final Report from the Commerce Department Technical
          Advisory Committee on Key Recovery
     Comments From Readers

** *** ***** ******* *********** *************

        The Fallacy of Cracking Contests

You see them all the time: "Company X offers $1,000,000 to anyone who can break through their firewall/crack their algorithm/make a fraudulent transaction using their protocol/do whatever." These are cracking contests, and they're supposed to show how strong and secure the target of the contest is. The logic goes something like this: We offered a prize to break the target, and no one did. This means that the target is secure.

It doesn't. Contests are a terrible way to demonstrate security. A product/system/protocol/algorithm that has survived a contest unbroken is not obviously more trustworthy than one that has not been the subject of a contest. The best products/systems/protocols/algorithms available today have not been the subjects of any contests, and probably never will be. Contests generally don't produce useful data, for three basic reasons.

1. The contests are generally unfair. Cryptanalysis assumes that the attacker knows everything except the secret. He has access to the algorithms and protocols, the source code, everything. He knows the ciphertext and the plaintext. He may even know something about the key. And a cryptanalytic result can be anything. It can be a complete break: a result that breaks the security in a reasonable amount of time. It can be a theoretical break: a result that doesn't work "operationally," but still shows that the security isn't as good as advertised. It can be anything in between.

Most cryptanalysis contests have arbitrary rules. They define what the attacker has to work with, and what counts as a successful break. Jaws Technologies, for example, provided a ciphertext file and, without explaining how their algorithm worked, offered a prize to anyone who could recover the plaintext. This isn't how real cryptanalysis works; if no one wins the contest, it means nothing.

Most contests don't disclose the algorithm. And since most cryptanalysts don't have the skills for reverse-engineering (I find it tedious and boring), they never bother analyzing the systems. This is why COMP128, CMEA, ORYX, the Firewire cipher, and the Netscape PRNG were all broken within months of their disclosure (despite the fact that some of them have been widely deployed for many years): once the algorithm is revealed, it's easy to see the flaw, but it might take years before someone bothers to reverse-engineer the algorithm and publish it. Contests don't help.

(Of course, the above paragraph does not hold true for the military. There are countless examples of successful reverse-engineering--VENONA, PURPLE--in the "real" world.
But the academic world doesn't work that way, fortunately or unfortunately.)

Unfair contests aren't new. Back in the mid-1980s, the authors of an encryption algorithm called FEAL issued a contest. They provided a ciphertext file, and offered a prize to the first person to recover the plaintext. The algorithm has been repeatedly broken by cryptographers--through differential and then linear cryptanalysis, and by other statistical attacks. Everyone agrees that the algorithm was badly flawed. Still, no one won the contest.

2. The analysis is not controlled. Contests are random tests. Do ten people, each working 100 hours to win the contest, count as 1000 hours of analysis? Or did they all try the same things? Are they even competent analysts, or are they just random people who heard about the contest and wanted to try their luck? Just because no one wins a contest doesn't mean the target is secure...it just means that no one won.

3. Contest prizes are rarely good incentives. Cryptanalysis of an algorithm, protocol, or system can be a lot of work. People who are good at it are going to do the work for a variety of reasons--money, prestige, boredom--but trying to win a contest is rarely one of them. Contests are viewed in the community with skepticism: most companies that sponsor contests are not well known, and people don't believe that they will judge the results fairly. And trying to win a contest is no sure thing: someone could beat you, leaving you nothing to show for your efforts. Cryptanalysts are much better off analyzing systems where they are paid for their analysis work, or systems for which they can publish a paper explaining their results.

Just look at the economics. At a conservative $125 an hour for a competent cryptanalyst, a $10K prize pays for only two weeks of work (80 hours)--not enough time to even dig through the code. A $100K prize might be worth a look, but reverse-engineering the product is boring, and that's still not enough time to do a thorough job. A prize of $1M starts to become interesting, but most companies can't afford to offer that. And the cryptanalyst has no guarantee of getting paid: he may not find anything, he may get beaten to the attack and lose out to someone else, or the company might not even pay. Why should a cryptanalyst donate his time (and good name) to the company's publicity campaign?

Cryptanalysis contests are generally nothing more than a publicity tool. Sponsoring a contest, even a fair one, is no guarantee that people will analyze the target. Surviving a contest is no guarantee that there are no flaws in the target.

The true measure of trustworthiness is how much analysis has been done, not whether there was a contest. And analysis is a slow and painful process. People trust cryptographic algorithms (DES, RSA), protocols (Kerberos), and systems (PGP, IPSec) not because of contests, but because all have been subjected to years (decades, even) of peer review and analysis. And they have been analyzed not because of some elusive prize, but because they were either interesting or widely deployed.

The analysis of the fifteen AES candidates is going to take several years. There isn't a prize in the world that's going to make the best cryptanalysts drop what they're doing and examine the offerings of Meganet Corporation or RPK Security Inc., two companies that recently offered cracking prizes. It's much more interesting to find flaws in Java, or Windows NT, or cellular telephone security.

The above three reasons are generalizations.
There are exceptions, but they are few and far between.

The RSA challenges, both their factoring challenges and their symmetric brute-force challenges, are fair and good contests. These contests are successful not because the prize money is an incentive to factor numbers or build brute-force cracking machines, but because researchers are already interested in factoring and brute-force cracking. The contests simply provide a spotlight for what was already an interesting endeavor. The AES contest, although more a competition than a cryptanalysis contest, is also fair.

Our Twofish cryptanalysis contest offers a $10K prize for the best negative comments on Twofish that aren't written by the authors. There are no arbitrary definitions of what a winning analysis is. There is no ciphertext to break or keys to recover. We are simply rewarding the most successful cryptanalysis research result, whatever it may be and however successful it is (or is not). Again, the contest is fair because 1) the algorithm is completely specified, 2) there are no arbitrary definitions of what winning means, and 3) the algorithm is public domain.

Contests, if implemented correctly, can provide useful information and reward particular areas of research. But they are not useful metrics to judge security. I can offer $10K to the first person who successfully breaks into my home and steals a book off my shelf. If no one does so before the contest ends, that doesn't mean my home is secure. Maybe no one with any burgling ability heard about my contest. Maybe they were too busy doing other things. Maybe they weren't able to break into my home, but they figured out how to forge the real-estate title to put the property in their name. Maybe they did break into my home, but took a look around and decided to come back when there was something more valuable than a $10,000 prize at stake. The contest proved nothing.

Gene Spafford has written against hacking contests:
http://www.itd.nrl.navy.mil/ITD/5540/ieee/cipher/old-issues/issue9602

Matt Blaze has too, but I can't find a good URL.

** *** ***** ******* *********** *************

        How to Recognize Plaintext

A brute-force cracking machine tries every possible key until it finds the right one. If the machine has a chunk of ciphertext and decrypts it with one key after the other, how does it know when it has found the correct plaintext? It seems obvious to me, but I get this question often enough to address it in these pages.

The machine knows that it has found the plaintext because it looks like plaintext. Plaintext tends to look like plaintext. It's an English-language message, or a data file from a computer application (programs like Microsoft Word have large known headers; even PK-ZIP files have known headers), or a database in a reasonable format. When you look at a decrypted file, it looks like something understandable. When you look at a ciphertext file, or a file decrypted with the wrong key, it looks like gibberish.
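To make that concrete, here is a minimal sketch of the kind of test a brute-force machine might run on each candidate decryption. This is my own illustration, not part of the original article: the function name looks_like_plaintext, the COMMON character set, and the 0.95 and 0.5 thresholds are all illustrative choices that a real cracker would tune to its expected plaintext.

    import string

    # The twelve most frequent English letters plus the space character;
    # together these make up roughly 70% or more of typical English text.
    COMMON = set(b"etaoinshrdlu ETAOINSHRDLU")
    PRINTABLE = set(string.printable.encode())

    def looks_like_plaintext(candidate: bytes) -> bool:
        """Crude test: does this candidate decryption look like English?"""
        if not candidate:
            return False
        # A wrong key produces effectively random bytes, and random bytes
        # almost always fail the printable test.
        if sum(b in PRINTABLE for b in candidate) / len(candidate) < 0.95:
            return False
        # Random *printable* data still fails the frequency test: under a
        # uniform byte distribution only about 10% of bytes land in COMMON,
        # versus roughly 70% for real English.
        return sum(b in COMMON for b in candidate) / len(candidate) > 0.5

    print(looks_like_plaintext(b"Attack at dawn, then fall back."))  # True
    print(looks_like_plaintext(bytes(range(44, 88))))                # False

The same idea works for non-English plaintext: swap the letter-frequency test for a check against the known Word or PK-ZIP header bytes mentioned above.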
In the 1940s, Claude Shannon invented a concept called the unicity distance. Among other things, the unicity distance measures the amount of ciphertext required such that there is only one reasonable plaintext. This number depends both on the characteristics of the plaintext and the key length of the encryption algorithm.

For example, RC4 encrypts data in bytes. Imagine a single ASCII letter as plaintext. There are 26 possible plaintexts out of 256 possible decryptions. Any random key, when used to decrypt the ciphertext, has a 26/256 chance of producing a valid plaintext. The analyst has no way to tell the wrong plaintext from the correct plaintext. Now imagine a 1K e-mail message. The analyst tries random keys, and eventually a plaintext emerges that looks like an e-mail message: words, phrases, sentences, grammar. The odds are infinitesimal that this is not the correct plaintext.

Everything else is in the middle. The unicity distance determines when you can think like the second example instead of the first. For a standard English message, the unicity distance is K/6.8, where K is the key length in bits. (The 6.8 is a measure of the redundancy of English in ASCII, in bits per character. For other plaintexts it will be more or less, but not that much more or less.) For DES, the unicity distance is 8.2 bytes. For 128-bit ciphers, it is about 19 bytes.

This means that if you are trying to brute-force DES you need two ciphertext blocks. (DES's block length is 8 bytes.) Decrypt the first ciphertext block with one key after another. If the resulting plaintext looks like English, then decrypt the second block with the same key. If the second plaintext block also looks like English, you've found the correct key.

The unicity distance grows as the redundancy of the plaintext shrinks. For compressed files, the redundancy might be 2.5 bits per character, making the unicity distance for DES about 22 bytes: three blocks of DES ciphertext. For a 256-bit-key cipher, that would be about 102 plaintext bytes. If the plaintext is a random key, the redundancy is zero and the unicity distance reaches infinity: it is impossible to distinguish the correct plaintext from an incorrect plaintext. But that's a special case. Most of the time, it is easy to recognize plaintext.
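The arithmetic is simple enough to spell out as a short sketch. This is mine, not the article's (the function name unicity_distance is made up for illustration), but the figures match the ones above:

    def unicity_distance(key_bits: int, redundancy: float = 6.8) -> float:
        """Bytes of ciphertext needed before only one plaintext is plausible.

        6.8 bits per character is the redundancy of English in ASCII;
        compressed data is closer to 2.5 bits per character, and a truly
        random plaintext has zero redundancy.
        """
        if redundancy <= 0:
            return float("inf")  # random plaintext: never recognizable
        return key_bits / redundancy

    print(unicity_distance(56))        # DES, English plaintext: ~8.2 bytes
    print(unicity_distance(128))       # 128-bit key, English: ~18.8 bytes
    print(unicity_distance(56, 2.5))   # DES, compressed: ~22.4 bytes
    print(unicity_distance(256, 2.5))  # 256-bit key, compressed: ~102 bytes
    print(unicity_distance(128, 0.0))  # plaintext is a random key: inf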
** *** ***** ******* *********** *************

        News

Okay, I finally got the story right about Network Associates Inc. and the Key Recovery Alliance. (Last month I pointed to a Wired News story claiming that they had quietly rejoined.) That story is wrong. They never left the KRA. Since its inception, Trusted Information Systems was a big mover and shaker in the KRA. When NAI bought TIS in May 1998, TIS's membership transferred to NAI. NAI resigned the leadership posts that TIS had held in the Alliance and stopped attending its meetings, but never left the KRA. So, NAI is a member of the KRA, and has been since it bought TIS.
http://www.wired.com/news/print_version/technology/story/16219.html

A federal judge has issued a temporary restraining order (TRO) against enforcement of the Child Online Protection Act (COPA). (That's the so-called CDA II.) This is a big deal, and a cause for celebration. The judge's decision is at:
http://www.aclu.org/court/acluvrenoII_order.html
The full text of the plaintiffs' brief is available at:
http://www.epic.org/free_speech/copa/tro_brief.html
To follow this story, subscribe to EPIC Alert:
http://www.epic.org/alert/subscribe.html

This is a fascinating Web security hole:
http://www.securexpert.com/framespoof/index.html

Attorneys for Liquid Audio, a company promoting secure music distribution on the Net, pressured someone to remove information on how to defeat copy protection. You'd think they still taught the First Amendment in law school.
http://www.mp3.com/news/122.html

A good C++ library: NTL provides data structures and algorithms for manipulating signed, arbitrary-length integers, and for vectors, matrices, and polynomials over the integers and over finite fields. Version 3.1b has just been released.
http://www.cs.wisc.edu/~shoup/ntl/

The Clinton Administration isn't satisfied with trying to destroy personal privacy in the U.S.; for years it has been taking its arguments abroad. Now, according to the administration, the 33 Wassenaar Arrangement signatories have agreed to implement the same bizarre export controls on mass-market encryption software as the U.S. has. I don't know if this is true--the administration has lied before about the international reception of its policies--but if it is, it's a major step backwards. I'm not sure why the administration believes that ensuring sensitive information is encrypted poorly enough for criminals to read it will help the world, but I'm sure they've got it figured out.
http://www.nytimes.com/library/tech/98/12/biztech/articles/04encrypt.html
http://jya.com/wass-au.htm
http://jya.com/wass-de2.htm
http://biz.yahoo.com/rf/981203/3l.html

For the record, the Wassenaar Arrangement countries are: Argentina, Australia, Austria, Belgium, Bulgaria, Canada, Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Japan, Republic of Korea, Luxembourg, The Netherlands, New Zealand, Norway, Poland, Portugal, Romania, Russia, Slovakia, Spain, Sweden, Switzerland, Turkey, Ukraine, United Kingdom, and United States. Some of these countries are presently the major sources for the international distribution of cryptographic software.
http://www.wassenaar.org/

Visit the Free Crypto website and send a message of support.
http://www.freecrypto.org

** *** ***** ******* *********** *************

        The Doghouse: Iomega Zip Disks

The following instructions describe a hack around the read/write protection on IOMEGA ZIP-100 disks. This isn't my work; I received it anonymously. If anyone knows who discovered this, please have him or her contact me.

The password protection feature involves having a password and security flag stored on the ZIP disk (in the boot sector?). This password and security status are read by the firmware in the ZIP drive, and the drive refuses to allow access to a disk it believes to be read/write protected.

The ZIP drive supports a "power down" feature that literally stops the drive spinning (and parks the drive's read/write head?) after 15 minutes of inactivity. Drives automatically restart as needed. The firmware does not currently (28 May 1998) notice a disk change via the manual disk eject hole on the back of the device. If, after spin-down, a known protected disk is manually ejected and a different protected disk is then inserted, the password for the first disk is still considered current by the firmware, and is thus valid for the second disk. The second disk can then be "unprotected" by using the password of the first disk. (A toy model of this flaw appears in the sketch after the steps below.)

To implement this hack, perform the following steps:

1. You need a new blank ZIP-100 disk. Call this disk the KNOWN disk.
2. Insert the KNOWN disk into the ZIP drive.
3. Give the KNOWN disk a read/write password using the IOMEGA toolset. Remember the password.
4. Using the IOMEGA toolset startup preferences, set the SLEEP TIME of the ZIP drive to 1 minute.
5. Let the drive, with the KNOWN disk still inserted, spin down. (You can hear the obvious difference in noise output.)
6. Take an unfolded paper clip.
7. Poke it into the small hole at the back of the ZIP drive, and manually eject the KNOWN disk. (The hole is located at the back of the drive casing, above the printer or second SCSI connector.)
8. Insert the UNKNOWN disk.
9. The drive may spin up to speed again, or may remain silent.
10. Using the IOMEGA toolset, REMOVE the protection on this disk. Use the KNOWN PASSWORD from step (3) above.
11. Using the normal eject button on the front of the device, eject the UNKNOWN disk.
12. Reinsert the UNKNOWN disk a second time (to prepare for directory and file access).
13. Double-click on the disk drive icon in your Explorer/File Manager window.

Done.
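The flaw is easiest to see as a state machine. Here is a toy model, entirely my own illustration and not Iomega's actual firmware: the drive caches its knowledge of the inserted disk's password, and the manual-eject path never invalidates that cache.

    from typing import Optional

    class Disk:
        def __init__(self, password: Optional[str]):
            self.password = password  # None means unprotected

    class ToyZipDrive:
        def __init__(self):
            self.physical_disk = None    # the disk actually in the slot
            self.cached_password = None  # firmware's idea of that disk's password

        def insert(self, disk: Disk):
            # Normal insertion: the firmware reads the password block off
            # the disk, so the cache and the physical disk agree.
            self.physical_disk = disk
            self.cached_password = disk.password

        def paperclip_swap(self, new_disk: Disk):
            # The bug: a manual eject is invisible to the firmware, so the
            # cache still describes the PREVIOUS disk.
            self.physical_disk = new_disk

        def remove_protection(self, password: str) -> bool:
            # The firmware checks the supplied password against its cache,
            # then writes to whatever disk is physically present.
            if password == self.cached_password:
                self.physical_disk.password = None
                return True
            return False

    drive = ToyZipDrive()
    known = Disk("mypassword")       # the blank disk we protected ourselves
    target = Disk("never-learned")   # the protected disk we want to open

    drive.insert(known)              # steps 1-5: cache holds "mypassword"
    drive.paperclip_swap(target)     # steps 6-8: swap behind the firmware's back
    print(drive.remove_protection("mypassword"))  # step 10: True
    print(target.password)           # None -- protection stripped

The underlying problem is a classic one: authorization state that outlives the object it was granted for.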
** *** ***** ******* *********** *************

        Counterpane Systems News

There is another Twofish technical report (#3) on the website. This one is called "Improved Twofish Implementations," and gives better performance numbers for 32-bit computers, smart cards, and hardware.
http://www.counterpane.com/twofish-speed.html

There is also a Twofish implementation in Delphi:
http://www.hertreg.ac.uk/ss/d_crypto.html

And finally, we have published a paper that compares the performance of all fifteen AES candidates on 32-bit processors, smart cards, and hardware.
http://www.counterpane.com/AES-performance.html

** *** ***** ******* *********** *************

    Final Report from the Commerce Department Technical
         Advisory Committee on Key Recovery

The Technical Advisory Committee to Develop a Federal Information Processing Standard for the Federal Key Management Infrastructure was established by the Department of Commerce in July 1996. The Committee, which was formally chartered on July 24, 1996, held its first meeting on December 5-6, 1996. The goal, near as I can tell, was to get industry to agree on rules for key recovery. Its final meeting was held November 17-19, 1998.

The meeting was more of a fizzle than a finale. Although the committee concluded all of its work, its final document continues to undergo editing and will continue to do so for the next several weeks. The document is far more coherent than it was at the end of the June "finale"; however, it has still not been thoroughly reviewed. Here are some points that may be of interest.

Although it may be described as such, the committee's final document was hardly the product of "a 22-member commission." Participation dwindled to such an extent that members who were not present at the final meeting were asked to resign so that a quorum could be achieved. By the end of the final meeting, the fewer than half a dozen remaining participants were still making substantive changes to the document. Changes continued to be made by the chair and other individuals even after the conclusion of the committee's final meeting.

The commission did not "conclude" anything. The committee never voted on or addressed any issues about whether key recovery makes sense, where it could or should be deployed, or even whether it is possible, safe, or economically sensible.

The commission did not make any recommendations. The work of the committee is a matter of public record and will presumably be continued by NIST. However, the committee did not recommend that its work go forward.

The document produced by the committee does not give a blueprint for how to do key recovery. It lists over two hundred things that key recovery products should and should not do, without ever saying how to do them.

There is no reason to believe that the committee's document is consistent or complete. As an example, most of the document's 200+ requirements are contained in a section that was completely rewritten by a single member shortly before the final meeting. During the last day of the final meeting, and before this section had been reviewed by what remained of the committee, this individual left, saying that since the group had already dropped below quorum, his presence was no longer necessary. The section was later reviewed only superficially by the few remaining members.
The document was written at least as much by NIST and NSA as by the committee. Although the only voting members of the committee were from the private sector, this was certainly not a product of (or at the initiative of) the private sector.

You can read the document yourself at:
http://csrc.nist.gov/tacdfipsfkmi/

** *** ***** ******* *********** *************

        Comments from Readers

>>>From Chris Smith:

You write about Canada's new crypto policy. However, the new policy is not, as you have described, about "export policy," but about domestic policy. This is about cryptography in Canada. My understanding is that Canada will continue to abide by the Wassenaar Arrangement, which currently limits the export of strong encryption.

>>>From George Foot:

Schneier wrote: "The electronic world moves too fast for this cycle. A serious flaw in an electronic commerce system could bankrupt a company in days. Today's systems must anticipate future attacks. Any successful electronic commerce system is likely to remain in use for ten years or more. It must be able to withstand the future: smarter attackers, more computational power, and greater incentives to subvert a widespread system. There won't be time to upgrade them in the field."

I read every issue of CRYPTO-GRAM with great interest. If I may, as a simple subscriber, be permitted an observation, I would say that there is a great need to learn from the warning in the above paragraph. But what principles should be adopted to guard against the danger which is predicted? My criterion is simplicity. Discard the complex algorithms which at every convolution open a crack through which their inner workings may be examined exhaustively without limit of time. Choose instead a simple system which has unique parameters for every two stations desiring to intercommunicate and for every message passed, which publishes no keys, and which has no association whatever with a third party.

>>>From Illuminatus Primus:

I see that the main reason automation poses a new risk is that the system increasingly depends on dumb (comparatively speaking) components rather than humans. A small, isolated program will usually not recognize when cash flow trends suddenly change, because small, isolated programs will not teach themselves how to recognize fraud. However, humans sometimes unreasonably expect them to. The cure is for more people to spend the time translating warning systems that humans are accustomed to into machine-friendly language. For example, Visa warns me every time a sharp change in my spending patterns is noticed. It's almost like I have a friendly banker keeping watch over my money, but in reality it's just a few checks completed in a fraction of a second by a computer somewhere.

Automation also brings a great benefit: new holes in the system can be squashed in very little time. Centralized systems benefit the moment the hole is repaired, and decentralized systems benefit the moment the fix propagates. I think a good argument could be made that the current human-dependent systems might actually be worse off than the digital ones. For example, look at the problems involved with introducing new U.S. currency to stop counterfeiting. Humans can often take longer to adapt to a new situation than the time it takes to transmit a new piece of authentication code across the world millions of times.

Perhaps there is a larger issue underneath all of this: the need for individual nodes to become more intelligent.
If my credit card were actually a small terminal that allowed me to clear requests to charge to it, perhaps fraud against my card would be more difficult to accomplish. Perhaps if cars had meters at the gasoline intake, gas stations couldn't fraudulently overcharge people so easily. And perhaps, if merchants exchanged digital money instead of paper money, counterfeiting wouldn't take decades to stamp out.

** *** ***** ******* *********** *************

CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on cryptography and computer security.

To subscribe, visit http://www.counterpane.com/crypto-gram.html or send a blank message to crypto-gram-subscribe@chaparraltree.com. To unsubscribe, visit http://www.counterpane.com/unsubform.html. Back issues are available at http://www.counterpane.com.

Please feel free to forward CRYPTO-GRAM to colleagues and friends who will find it valuable. Permission is granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is president of Counterpane Systems, the author of "Applied Cryptography," and an inventor of the Blowfish, Twofish, and Yarrow algorithms. He served on the board of the International Association for Cryptologic Research, EPIC, and VTW. He is a frequent writer and lecturer on cryptography.

Counterpane Systems is a six-person consulting firm specializing in cryptography and computer security. Counterpane provides expert consulting in: design and analysis, implementation and testing, threat modeling, product research and forecasting, classes and training, intellectual property, and export consulting. Contracts range from short-term design evaluations and expert opinions to multi-year development efforts.
http://www.counterpane.com/

Copyright (c) 1998 by Bruce Schneier